
    Data-Driven Learning of a Union of Sparsifying Transforms Model for Blind Compressed Sensing

    Compressed sensing is a powerful tool in applications such as magnetic resonance imaging (MRI). It enables accurate recovery of images from highly undersampled measurements by exploiting the sparsity of the images or image patches in a transform domain or dictionary. In this work, we focus on blind compressed sensing (BCS), where the underlying sparse signal model is a priori unknown, and propose a framework to simultaneously reconstruct the underlying image as well as the unknown model from highly undersampled measurements. Specifically, our model is that the patches of the underlying image(s) are approximately sparse in a transform domain. We also extend this model to a union of transforms model that better captures the diversity of features in natural images. The proposed block coordinate descent type algorithms for blind compressed sensing are highly efficient, and are guaranteed to converge to at least the partial global and partial local minimizers of the highly non-convex BCS problems. Our numerical experiments show that the proposed framework usually leads to better quality of image reconstructions in MRI compared to several recent image reconstruction methods. Importantly, the learning of a union of sparsifying transforms leads to better image reconstructions than a single adaptive transform.
    Comment: Appears in IEEE Transactions on Computational Imaging, 201
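
    As a hedged illustration of the union-of-transforms idea, the sparse coding and clustering step could be sketched as below; the names (`cluster_and_sparse_code`, `keep_top_s`) and the sparsity-constrained thresholding rule are our assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def keep_top_s(v, s):
    # sparsity projection: keep the s largest-magnitude entries, zero the rest
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def cluster_and_sparse_code(patches, transforms, s):
    """Assign each vectorized patch (column of `patches`) to the transform
    in the union that sparsifies it best, and return labels and codes."""
    n_patches = patches.shape[1]
    labels = np.zeros(n_patches, dtype=int)
    codes = np.zeros((transforms[0].shape[0], n_patches))
    for j in range(n_patches):
        errors = [np.sum((W @ patches[:, j]
                          - keep_top_s(W @ patches[:, j], s)) ** 2)
                  for W in transforms]
        k = int(np.argmin(errors))
        labels[j] = k
        codes[:, j] = keep_top_s(transforms[k] @ patches[:, j], s)
    return labels, codes
```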

    Learning Multi-Layer Transform Models

    Learned data models based on sparsity are widely used in signal processing and imaging applications. A variety of methods for learning synthesis dictionaries, sparsifying transforms, etc., have been proposed in recent years, often imposing useful structures or properties on the models. In this work, we focus on sparsifying transform learning, which enjoys a number of advantages. We consider multi-layer or nested extensions of the transform model, and propose efficient learning algorithms. Numerical experiments with image data illustrate the behavior of the multi-layer transform learning algorithm and its usefulness for image denoising. Multi-layer models provide better denoising quality than single layer schemes.
    Comment: In Proceedings of the Annual Allerton Conference on Communication, Control, and Computing, 201
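
    A minimal sketch of one plausible nested construction, in which each layer transforms and thresholds the residual left by the previous layer; the layering scheme and names here are assumptions for illustration:

```python
import numpy as np

def hard_threshold(v, thr):
    # zero out entries with magnitude below thr
    return v * (np.abs(v) >= thr)

def multilayer_encode(x, transforms, thresholds):
    """Pass a signal through a stack of sparsifying transforms, feeding
    each layer the sparsification residual of the previous one."""
    codes = []
    residual = x
    for W, thr in zip(transforms, thresholds):
        y = W @ residual
        z = hard_threshold(y, thr)
        codes.append(z)
        residual = y - z  # what this layer failed to sparsify
    return codes, residual
```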

    ℓ0 Sparsifying Transform Learning with Efficient Optimal Updates and Convergence Guarantees

    Many applications in signal processing benefit from the sparsity of signals in a certain transform domain or dictionary. Synthesis sparsifying dictionaries that are directly adapted to data have been popular in applications such as image denoising, inpainting, and medical image reconstruction. In this work, we focus instead on the sparsifying transform model, and study the learning of well-conditioned square sparsifying transforms. The proposed algorithms alternate between an ℓ0 "norm"-based sparse coding step and a non-convex transform update step. We derive the exact analytical solution for each of these steps. The proposed solution for the transform update step achieves the global minimum in that step, and also provides speedups over iterative solutions involving conjugate gradients. We establish that our alternating algorithms are globally convergent to the set of local minimizers of the non-convex transform learning problems. In practice, the algorithms are insensitive to initialization. We present results illustrating the promising performance and significant speed-ups of transform learning over synthesis K-SVD in image denoising.
    Comment: Accepted to IEEE Transactions on Signal Processing
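
    The alternating structure might be sketched as follows; the SVD-based transform update mirrors the analytical solution the abstract describes, but the constants below are reconstructed from memory and should be checked against the paper (a hedged sketch, not a verified reimplementation):

```python
import numpy as np

def sparse_code(Y, s):
    # exact l0-constrained sparse coding: keep the s largest-magnitude
    # entries in each column of Y
    Z = np.zeros_like(Y)
    idx = np.argsort(np.abs(Y), axis=0)[-s:, :]
    np.put_along_axis(Z, idx, np.take_along_axis(Y, idx, axis=0), axis=0)
    return Z

def transform_update(X, Z, lam):
    # closed-form minimizer of ||W X - Z||_F^2 + lam*(||W||_F^2 - log|det W|),
    # in the SVD-based form of the paper's analytical solution (constants
    # are assumptions here)
    n = X.shape[0]
    L = np.linalg.cholesky(X @ X.T + lam * np.eye(n))
    Linv = np.linalg.inv(L)
    U, S, Vt = np.linalg.svd(Linv @ X @ Z.T)
    D = np.diag(0.5 * (S + np.sqrt(S ** 2 + 2.0 * lam)))
    return Vt.T @ D @ U.T @ Linv

def learn_transform(X, s, lam, iters=30):
    W = np.eye(X.shape[0])  # the paper reports insensitivity to initialization
    for _ in range(iters):
        Z = sparse_code(W @ X, s)
        W = transform_update(X, Z, lam)
    return W
```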

    Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to MRI

    Natural signals and images are well-known to be approximately sparse in transform domains such as Wavelets and DCT. This property has been heavily exploited in various applications in image processing and medical imaging. Compressed sensing exploits the sparsity of images or image patches in a transform domain or synthesis dictionary to reconstruct images from undersampled measurements. In this work, we focus on blind compressed sensing, where the underlying sparsifying transform is a priori unknown, and propose a framework to simultaneously reconstruct the underlying image as well as the sparsifying transform from highly undersampled measurements. The proposed block coordinate descent type algorithms involve highly efficient optimal updates. Importantly, we prove that although the proposed blind compressed sensing formulations are highly nonconvex, our algorithms are globally convergent (i.e., they converge from any initialization) to the set of critical points of the objectives defining the formulations. These critical points are guaranteed to be at least partial global and partial local minimizers. The exact point(s) of convergence may depend on initialization. We illustrate the usefulness of the proposed framework for magnetic resonance image reconstruction from highly undersampled k-space measurements. As compared to previous methods involving the synthesis dictionary model, our approach is much faster, while also providing promising reconstruction quality.
    Comment: This work has been accepted for publication in the SIAM Journal on Imaging Sciences. It also appears in Saiprasad Ravishankar's PhD thesis, which was deposited with the University of Illinois on December 05, 201
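
    For the MRI setting, the image update step admits a simple closed form under common assumptions (single-coil Cartesian sampling, a unitary transform, and wrap-around patch extraction so the patch term reduces to a scaled identity); a sketch, with all names ours:

```python
import numpy as np

def mri_image_update(patch_image, kspace, mask, nu):
    """Solve min_x nu*||F_u x - y||^2 + ||x - r||^2 in closed form, where
    r (`patch_image`) is the image assembled from sparsified patches and
    F_u is the undersampled FFT. `mask` is a boolean sampling pattern."""
    F = np.fft.fft2(patch_image, norm="ortho")
    # blend current and measured k-space values at sampled locations
    F[mask] = (F[mask] + nu * kspace[mask]) / (1.0 + nu)
    return np.fft.ifft2(F, norm="ortho")
```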

    Analysis of Fast Alternating Minimization for Structured Dictionary Learning

    Methods exploiting sparsity have been popular in imaging and signal processing applications including compression, denoising, and imaging inverse problems. Data-driven approaches such as dictionary learning and transform learning enable one to discover complex image features from datasets and provide promising performance over analytical models. Alternating minimization algorithms have been particularly popular in dictionary or transform learning. In this work, we study the properties of alternating minimization for structured (unitary) sparsifying operator learning. While the algorithm converges to the stationary points of the non-convex problem in general, we prove rapid local linear convergence to the underlying generative model under mild assumptions. Our experiments show that the unitary operator learning algorithm is robust to initialization.
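
    In the unitary case both alternating steps are closed-form: hard thresholding for sparse coding, and an orthogonal Procrustes solution for the operator update. A minimal sketch, with the thresholding rule and initialization as illustrative choices:

```python
import numpy as np

def learn_unitary_operator(X, thr, iters=50, seed=0):
    """Alternate exact sparse coding and the closed-form unitary update
    for min over unitary W and sparse Z of ||W X - Z||_F^2."""
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[0],) * 2))  # unitary init
    for _ in range(iters):
        Z = (W @ X) * (np.abs(W @ X) >= thr)   # hard-thresholding step
        U, _, Vt = np.linalg.svd(Z @ X.T)      # Procrustes step: the unitary
        W = U @ Vt                             # minimizer of ||W X - Z||_F
    return W
```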

    FRIST - Flipping and Rotation Invariant Sparsifying Transform Learning and Applications

    Features based on sparse representation, especially using the synthesis dictionary model, have been heavily exploited in signal processing and computer vision. However, synthesis dictionary learning typically involves NP-hard sparse coding and expensive learning steps. Recently, sparsifying transform learning received interest for its cheap computation and its optimal updates in the alternating algorithms. In this work, we develop a methodology for learning Flipping and Rotation Invariant Sparsifying Transforms, dubbed FRIST, to better represent natural images that contain textures with various geometrical directions. The proposed alternating FRIST learning algorithm involves efficient optimal updates. We provide a convergence guarantee, and demonstrate the empirical convergence behavior of the proposed FRIST learning approach. Preliminary experiments show the promising performance of FRIST learning for sparse image representation, segmentation, denoising, robust inpainting, and compressed sensing-based magnetic resonance image reconstruction.
    Comment: Published in Inverse Problems
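
    One way to picture the FRIST clustering is below: each patch is matched to the flip/rotation operator under which a single parent transform sparsifies it best. Treating the flips/rotations as orthogonal matrices on vectorized patches is our simplification for illustration:

```python
import numpy as np

def keep_top_s(v, s):
    # keep the s largest-magnitude entries of v, zero the rest
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def frist_cluster(patches, W, flip_rot_ops, s):
    """Match each patch to the flip/rotation operator under which the
    parent transform W sparsifies it best; W @ G then acts as the
    effective per-cluster transform. `flip_rot_ops` is a list of
    orthogonal matrices acting on vectorized patches (illustrative)."""
    labels = []
    for x in patches.T:
        errs = [np.sum((W @ (G @ x) - keep_top_s(W @ (G @ x), s)) ** 2)
                for G in flip_rot_ops]
        labels.append(int(np.argmin(errs)))
    return np.array(labels)
```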

    Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) - The ℓ0 Method

    The sparsity of natural signals and images in a transform domain or dictionary has been extensively exploited in several applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise in many applications compared to fixed or analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. In this work, we investigate an efficient method for ℓ0 "norm"-based dictionary learning by first approximating the training data set with a sum of sparse rank-one matrices and then using a block coordinate descent approach to estimate the unknowns. The proposed block coordinate descent algorithm involves efficient closed-form solutions. In particular, the sparse coding step involves a simple form of thresholding. We provide a convergence analysis for the proposed block coordinate descent approach. Our numerical experiments show the promising performance and significant speed-ups provided by our method over the classical K-SVD scheme in sparse signal representation and image denoising.
    Comment: This work is cited by the IEEE Transactions on Computational Imaging paper arXiv:1511.06333 (DOI: 10.1109/TCI.2017.2697206)
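
    The rank-one (sum of outer products) block coordinate descent can be sketched as below; the hard-thresholding rule matches the ℓ0 "norm" penalty the abstract describes, though constants and tie-breaking are simplified relative to the paper:

```python
import numpy as np

def soup_dil_l0(Y, n_atoms, lam, iters=10, seed=0):
    """Approximate Y by sum_j d_j c_j^T with unit-norm atoms d_j and
    sparse coefficient vectors c_j, cycling over atoms with closed-form
    updates (hard thresholding for c_j, normalized projection for d_j)."""
    m, N = Y.shape
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    C = np.zeros((N, n_atoms))
    for _ in range(iters):
        for j in range(n_atoms):
            # residual with atom j's own contribution added back
            E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
            b = E.T @ D[:, j]
            c = b * (np.abs(b) >= lam)             # hard-thresholding update
            Ec = E @ c
            if np.linalg.norm(Ec) > 0:
                D[:, j] = Ec / np.linalg.norm(Ec)  # closed-form atom update
            C[:, j] = c
    return D, C
```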

    Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speed-ups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
    Comment: Accepted to IEEE Transactions on Computational Imaging. This paper also cites experimental results reported in arXiv:1511.0884
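
    In the dictionary-blind reconstruction setting, the sum-of-outer-products approximations of overlapping patches are averaged back onto the image grid before the data-fidelity update; a sketch, with the patch bookkeeping as our assumption:

```python
import numpy as np

def assemble_from_patches(patch_approx, patch_indices, n_pixels):
    """Average overlapping patch approximations (e.g., columns of D C^T)
    back into a flat image; a data-fidelity step (such as the k-space
    update sketched earlier for MRI) would follow. `patch_indices[j]`
    holds the flat pixel indices of patch j (layout is illustrative)."""
    num = np.zeros(n_pixels)
    den = np.zeros(n_pixels)
    for approx, idx in zip(patch_approx, patch_indices):
        np.add.at(num, idx, approx)  # accumulate, handling repeated indices
        np.add.at(den, idx, 1.0)
    return num / np.maximum(den, 1.0)
```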

    Supervised Learning of Sparsity-Promoting Regularizers for Denoising

    We present a method for supervised learning of sparsity-promoting regularizers for image denoising. Sparsity-promoting regularization is a key ingredient in solving modern image reconstruction problems; however, the operators underlying these regularizers are usually either designed by hand or learned from data in an unsupervised way. The recent success of supervised learning (mainly convolutional neural networks) in solving image reconstruction problems suggests that it could be a fruitful approach to designing regularizers. As a first experiment in this direction, we propose to denoise images using a variational formulation with a parametric, sparsity-promoting regularizer, where the parameters of the regularizer are learned to minimize the mean squared error of reconstructions on a training set of (ground truth image, measurement) pairs. Training involves solving a challenging bilevel optimization problem; we derive an expression for the gradient of the training loss using Karush-Kuhn-Tucker conditions and provide an accompanying gradient descent algorithm to minimize it. Our experiments on a simple synthetic denoising problem show that the proposed method can learn an operator that outperforms well-known regularizers (total variation, DCT-sparsity, and unsupervised dictionary learning) and collaborative filtering. While the approach we present is specific to denoising, we believe that it can be adapted to the whole class of inverse problems with linear measurement models, giving it applicability to a wide range of image reconstruction problems.
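
    The bilevel structure can be made concrete with a small sketch: a variational lower-level denoiser and an upper-level mean-squared-error training loss over (ground truth, measurement) pairs. The subgradient solver below is a stand-in of ours; the paper instead derives exact gradients via the KKT conditions:

```python
import numpy as np

def denoise(y, W, lam, step=0.1, iters=300):
    """Lower level: x*(W) = argmin_x 0.5*||x - y||^2 + lam*||W x||_1,
    solved here by plain subgradient descent (illustrative only)."""
    x = y.copy()
    for _ in range(iters):
        x = x - step * ((x - y) + lam * (W.T @ np.sign(W @ x)))
    return x

def training_loss(W, pairs, lam):
    # upper level: MSE of reconstructions over (truth, measurement) pairs
    return np.mean([np.sum((denoise(y, W, lam) - x_true) ** 2)
                    for x_true, y in pairs])
```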

    DECT-MULTRA: Dual-Energy CT Image Decomposition With Learned Mixed Material Models and Efficient Clustering

    Dual energy computed tomography (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Image-domain decomposition operates directly on CT images using linear matrix inversion, but the decomposed material images can be severely degraded by noise and artifacts. This paper proposes a new method dubbed DECT-MULTRA for image-domain DECT material decomposition that combines conventional penalized weighted-least squares (PWLS) estimation with regularization based on a mixed union of learned transforms (MULTRA) model. Our proposed approach pre-learns a union of common-material sparsifying transforms from patches extracted from all the basis materials, and a union of cross-material sparsifying transforms from multi-material patches. The common-material transforms capture the common properties among different material images, while the cross-material transforms capture the cross-dependencies. The proposed PWLS formulation is optimized efficiently by alternating between an image update step and a sparse coding and clustering step, with both of these steps having closed-form solutions. The effectiveness of our method is validated with both XCAT phantom and clinical head data. The results demonstrate that our proposed method provides superior material image quality and decomposition accuracy compared to other competing methods.
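
    The sparse coding and clustering step with a penalized ℓ0 criterion might look like the sketch below; for simplicity all transforms here act on same-size vectorized patches, glossing over the paper's distinction between single-material and multi-material patch dimensions:

```python
import numpy as np

def sparsification_cost(W, x, gamma):
    # penalized sparsification error: ||W x - z||^2 + gamma^2 * ||z||_0
    y = W @ x
    z = y * (np.abs(y) >= gamma)
    return np.sum((y - z) ** 2) + gamma ** 2 * np.count_nonzero(z)

def multra_cluster(patches, common_transforms, cross_transforms, gamma):
    """Assign each patch to whichever learned transform (common-material
    or cross-material) gives the smallest penalized sparsification cost,
    and compute the corresponding hard-thresholded code."""
    transforms = list(common_transforms) + list(cross_transforms)
    labels, codes = [], []
    for x in patches.T:
        costs = [sparsification_cost(W, x, gamma) for W in transforms]
        k = int(np.argmin(costs))
        y = transforms[k] @ x
        labels.append(k)
        codes.append(y * (np.abs(y) >= gamma))
    return np.array(labels), np.array(codes).T
```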